
    Functional identification of biological neural networks using reservoir adaptation for point processes

    The complexity of biological neural networks makes it impossible to relate their biophysical properties directly to the dynamics of their electrical activity. We present a reservoir computing approach for functionally identifying a biological neural network, i.e., for building an artificial system that is functionally equivalent to the reference biological network. Employing feed-forward and recurrent networks with fading memory, i.e., reservoirs, we propose a point-process-based learning algorithm to train the internal parameters of the reservoir and the connectivity between the reservoir and the memoryless readout neurons. Specifically, the model is an Echo State Network (ESN) with leaky integrator neurons, whose individual leakage time constants are also adapted. The proposed ESN algorithm learns a predictive model of stimulus-response relations in in vitro and simulated networks, i.e., it models their response dynamics. Receiver Operating Characteristic (ROC) curve analysis indicates that these ESNs can imitate the response signal of a reference biological network. Reservoir adaptation improved the performance of an ESN over readout-only training methods in many cases. This also held for adaptive feed-forward reservoirs, which have no recurrent dynamics. We demonstrate the predictive power of these ESNs on various tasks with cultured and simulated biological neural networks.
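    The leaky-integrator reservoir update underlying such an ESN can be sketched as follows. This is a minimal NumPy illustration of the standard leaky-integrator ESN state equation; the network sizes, weight scales, and per-neuron leak rates are illustrative assumptions, not the paper's actual parameters, and the adaptation of the leak rates during training is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 3, 50                        # input and reservoir sizes (illustrative)
leak = rng.uniform(0.1, 1.0, n_res)        # per-neuron leak rates (adapted in the paper)
W_in = rng.normal(0.0, 0.5, (n_res, n_in)) # input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))   # recurrent reservoir weights
# Scale the spectral radius below 1 so the reservoir has fading memory
# (the "echo state property").
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def step(x, u):
    """One leaky-integrator update: each neuron mixes its previous state
    with the nonlinearly transformed drive according to its own leak rate."""
    return (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)

# Drive the reservoir with random input; a linear readout would then be
# trained on the collected states x.
x = np.zeros(n_res)
for t in range(100):
    x = step(x, rng.normal(size=n_in))
```

    In this sketch a feed-forward reservoir, as also studied in the paper, corresponds to constraining W so the reservoir has no recurrent loops.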

    On the Trade-Off Between Iterative Classification and Collective Classification: First Experimental Results

    There have been two major approaches to the classification of networked (linked) data. Local approaches (iterative classification) learn a model locally, without considering unlabeled data, and apply the model iteratively to classify unlabeled data. Global approaches (collective classification), on the other hand, exploit unlabeled data and the links between labeled and unlabeled data for learning. Naturally, global approaches are computationally more demanding than local ones; moreover, for large data sets, approximate inference has to be performed to make computations feasible. In the present work, we investigate the benefits of collective classification based on global probabilistic models over local approaches. Our experimental results show that global approaches do not always outperform local approaches with respect to classification accuracy. More precisely, the results suggest that global approaches considerably outperform local approaches only for low ratios of labeled data.
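    The local (iterative) strategy can be illustrated with a toy relational classifier. In the sketch below, a simple neighbor-majority vote stands in for the learned local model described in the abstract; the data structures and names are hypothetical, not the paper's setup.

```python
import numpy as np

def iterative_classification(adj, labels, n_classes, n_iter=10):
    """Iterative (local) classification sketch.

    adj:    dict mapping node -> list of neighbor nodes.
    labels: dict mapping node -> known class, or None if unlabeled.

    Unlabeled nodes are bootstrapped to class 0, then repeatedly
    re-predicted from the current labels of their neighbors
    (majority vote as a stand-in for a learned local model).
    """
    pred = {v: (l if l is not None else 0) for v, l in labels.items()}
    for _ in range(n_iter):
        for v, l in labels.items():
            if l is not None:
                continue  # known labels stay fixed
            counts = np.zeros(n_classes)
            for u in adj[v]:
                counts[pred[u]] += 1
            pred[v] = int(counts.argmax())
    return pred
```

    A collective (global) approach would instead infer all unlabeled nodes jointly under one probabilistic model, which is what makes it more expensive and often requires approximate inference.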

    Mining structure-activity relations in biological neural networks using NeuronRank

    Because it is too difficult to relate the structure of a cortical neural network to its dynamic activity analytically, we employ machine learning and data mining to learn structure-activity relations from sample random recurrent cortical networks and corresponding simulations. Inspired by the PageRank and the Hubs & Authorities algorithms for networked data, we introduce the NeuronRank algorithm, which assigns a source value and a sink value to each neuron in the network. Source and sink values are used as structural features for predicting the activity dynamics of biological neural networks. Our results show that NeuronRank-based structural features can successfully predict average firing rates in the network, as well as the firing rate of output neurons reflecting the network population activity. They also indicate that link mining is a promising technique for discovering structure-activity relations in neural information processing. © 2007 Springer-Verlag Berlin Heidelberg.
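    The recursive assignment of source and sink values is closely analogous to HITS-style power iteration. The sketch below illustrates that general idea over a signed synaptic weight matrix; it is an assumption-laden stand-in, not the paper's exact update rule.

```python
import numpy as np

def neuronrank(W, n_iter=100):
    """HITS-style sketch of the NeuronRank idea.

    W[i, j] is the synaptic weight from neuron j to neuron i
    (excitatory positive, inhibitory negative; an assumed convention).

    Source value: influence a neuron exerts through outgoing synapses.
    Sink value:   influence a neuron receives through incoming synapses.
    Each is defined recursively in terms of the other, like hubs and
    authorities, and computed by normalized power iteration.
    """
    n = W.shape[0]
    source = np.ones(n)
    sink = np.ones(n)
    for _ in range(n_iter):
        # A neuron is a strong source if it projects to strong sinks.
        source = W.T @ sink
        source /= np.linalg.norm(source) or 1.0  # guard against a zero vector
        # A neuron is a strong sink if strong sources project to it.
        sink = W @ source
        sink /= np.linalg.norm(sink) or 1.0
    return source, sink
```

    The resulting per-neuron values can then serve as structural features for a downstream regressor or classifier predicting firing rates.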

    Ranking Neurons for Mining Structure-Activity Relations in Biological Neural Networks: NeuronRank

    It is difficult to relate the structure of a cortical neural network to its dynamic activity analytically. We therefore employ machine learning and data mining algorithms to learn these relations from sample random recurrent cortical networks and corresponding simulations. Inspired by the PageRank and the Hubs & Authorities algorithms, we introduce the NeuronRank algorithm, which assigns a source value and a sink value to each neuron in the network. We show how it can be used to extract structural features from a network for the successful prediction of its activity dynamics. Our results show that NeuronRank features can successfully predict average firing rates in the network, as well as the firing rate of output neurons reflecting the network population activity. Key words: structure-function relations, link mining, machine learning